The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
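The practices most frequently reported above (patch-based training, k-fold cross-validation on the training set, and ensembling of fold models) can be illustrated with a short sketch; the patch size, fold count, and the trivial stand-in "model" below are placeholder choices and are not taken from any surveyed solution.

```python
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(image, patch=64, stride=64):
    """Split a large 2D sample into fixed-size patches (patch-based training)."""
    h, w = image.shape
    return np.stack([image[y:y + patch, x:x + patch]
                     for y in range(0, h - patch + 1, stride)
                     for x in range(0, w - patch + 1, stride)])

# Placeholder data: 20 large "images" with binary labels.
images = np.random.rand(20, 512, 512)
labels = np.random.randint(0, 2, size=20)

# K-fold cross-validation on the training set: one model per fold.
fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(images):
    # A real solution would train a network on patches from images[train_idx];
    # here a trivial threshold "model" stands in for the trained network.
    threshold = images[train_idx].mean()
    fold_models.append(lambda img, t=threshold: float(extract_patches(img).mean() > t))

# Ensembling over identically configured fold models: average their predictions.
test_image = np.random.rand(512, 512)
ensemble_prediction = np.mean([m(test_image) for m in fold_models])
print(ensemble_prediction)
```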
High-fidelity facial avatar reconstruction from a monocular video is a significant research problem in computer graphics and computer vision. Recently, Neural Radiance Fields (NeRF) have shown impressive novel-view rendering results and have been considered for facial avatar reconstruction. However, the complex facial dynamics and the missing 3D information in monocular videos raise significant challenges for faithful facial reconstruction. In this work, we propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior. Different from existing works that rely on a conditional deformation field for dynamic modeling, we propose to learn a personalized generative prior, formulated as a local and low-dimensional subspace in the latent space of a 3D-GAN. We propose an efficient method to construct this personalized generative prior from a small set of facial images of a given individual. After learning, it allows photo-realistic rendering from novel views, and face reenactment can be realized by navigating in the latent space. Our proposed method is applicable to different driving signals, including RGB images, 3DMM coefficients, and audio. Compared with existing works, we obtain superior novel view synthesis results and faithful face reenactment performance.
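A hedged sketch of the core idea of a personalized, low-dimensional subspace in a generator's latent space: given latent codes inverted from a small set of images of one person, a PCA basis spans a local subspace, and navigation amounts to moving within that subspace. The GAN inversion step and the 3D-aware generator itself are assumed and not shown.

```python
import numpy as np

# Assume w_codes holds latent codes (e.g., W-space vectors) obtained by inverting
# a small set of facial images of one individual into a pretrained 3D-aware GAN.
num_images, latent_dim, subspace_dim = 30, 512, 8
w_codes = np.random.randn(num_images, latent_dim)  # placeholder for real inversions

# Personalized prior: mean code plus a low-dimensional PCA basis around it.
mean_code = w_codes.mean(axis=0)
centered = w_codes - mean_code
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:subspace_dim]                     # (subspace_dim, latent_dim)

def navigate(coefficients):
    """Map low-dimensional coefficients (e.g., predicted from a driving signal)
    back to a full latent code inside the personalized subspace."""
    return mean_code + coefficients @ basis

# Reenactment would feed navigate(c) to the 3D-aware generator for rendering.
new_code = navigate(np.random.randn(subspace_dim) * 0.5)
print(new_code.shape)
```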
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
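As a hedged illustration of how such a solution is typically prepared for an INT8 edge NPU, the sketch below applies TensorFlow Lite post-training quantization to a deliberately tiny placeholder 3X super-resolution network; the actual architectures, calibration data, and conversion settings used by the challenge participants may differ.

```python
import numpy as np
import tensorflow as tf

# Placeholder 3x super-resolution network; real challenge entries were more elaborate.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(3 * 9, 3, padding="same")(x)
outputs = tf.nn.depth_to_space(x, 3)          # pixel shuffle: 3x spatial upscaling
model = tf.keras.Model(inputs, outputs)

def representative_dataset():
    """Calibration samples for quantization; DIV2K crops would be used in practice."""
    for _ in range(8):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8      # fully-integer model for the edge NPU
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
open("sr_int8.tflite", "wb").write(tflite_model)
```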
Image retrieval has become an increasingly attractive technique with broad multimedia application prospects, in which deep hashing serves as the dominant branch towards low storage and efficient retrieval. In this paper, we carry out an in-depth investigation of metric learning in deep hashing for establishing a powerful metric space in multi-label scenarios, where the pairwise loss suffers from high computational overhead and convergence difficulty, while the proxy loss is theoretically incapable of expressing the profound label dependencies and exhibits conflicts in the constructed hypersphere space. To address these problems, we propose a novel metric learning framework with a hybrid proxy loss (HyP² loss) that constructs an expressive metric space with efficient training complexity w.r.t. the whole dataset. The proposed HyP² loss focuses on optimizing the hypersphere space through learnable proxies and on excavating data-to-data correlations of irrelevant pairs, integrating the sufficient data correspondence of pair-based methods with the high efficiency of proxy-based methods. Extensive experiments on four standard multi-label benchmarks demonstrate that the proposed method outperforms the state-of-the-art, is robust across different hash bits, and achieves significant performance gains with a faster and more stable convergence speed. Our code is available at https://github.com/jerryxu0129/hyp2-loss.
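The abstract's combination of learnable proxies with data-to-data terms for irrelevant pairs can be sketched roughly in PyTorch as below; the exact HyP² formulation, margins, and weighting are given in the paper and the linked repository, so treat this as an illustrative stand-in rather than the published loss.

```python
import torch
import torch.nn.functional as F

class HybridProxyPairLoss(torch.nn.Module):
    """Illustrative hybrid loss: learnable proxies attract relevant embeddings on the
    unit hypersphere, while a pair term pushes apart samples sharing no label."""
    def __init__(self, num_classes, dim, margin=0.5, pair_weight=1.0):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, dim))
        self.margin, self.pair_weight = margin, pair_weight

    def forward(self, embeddings, labels):             # labels: (B, C) multi-hot floats
        z = F.normalize(embeddings, dim=1)
        p = F.normalize(self.proxies, dim=1)
        proxy_sim = z @ p.t()                           # (B, C) cosine similarities
        # Proxy term: pull samples towards the proxies of their positive labels.
        proxy_loss = ((1.0 - proxy_sim) * labels).sum() / labels.sum().clamp(min=1)
        # Pair term: penalize high similarity between samples with no common label.
        pair_sim = z @ z.t()
        irrelevant = (labels @ labels.t() == 0).float()
        pair_loss = (F.relu(pair_sim - self.margin) * irrelevant).sum() / irrelevant.sum().clamp(min=1)
        return proxy_loss + self.pair_weight * pair_loss

loss_fn = HybridProxyPairLoss(num_classes=20, dim=64)
loss = loss_fn(torch.randn(8, 64), torch.randint(0, 2, (8, 20)).float())
print(loss.item())
```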
The goal of a recommender system is to model the relevance between each user and each item from the user-item interaction history, so that positive samples are scored as high as possible and negative samples as low as possible. Currently, two popular loss functions are widely used to optimize recommender systems: pointwise and pairwise. Although these loss functions are widely used, there are two problems: (1) these conventional loss functions do not fit the goal of recommender systems well and do not make full use of prior knowledge; (2) their slow convergence makes the practical application of various recommendation models difficult. To address these issues, we propose a novel loss function named Supervised Personalized Ranking (SPR) based on prior knowledge. The proposed method improves the BPR loss by exploiting the prior knowledge contained in the interaction history of each user or item in the raw data. Unlike BPR, which constructs <user, positive item, negative item> triples, the proposed SPR constructs <user, similar user, positive item, negative item> quadruples. Although SPR is very simple, it is very effective. Extensive experiments show that our proposed SPR not only achieves better recommendation performance, but also significantly accelerates convergence, greatly reducing the required training time.
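To make the triple-versus-quadruple distinction concrete, here is a rough PyTorch sketch of a BPR triple loss and an SPR-style quadruple loss that additionally uses a similar user's score for the positive item; the exact combination used in the paper may differ, so the weighting below is a placeholder.

```python
import torch
import torch.nn.functional as F

def bpr_loss(score_pos, score_neg):
    """BPR over <user, positive item, negative item> triples."""
    return -F.logsigmoid(score_pos - score_neg).mean()

def spr_loss(score_pos, score_neg, score_sim_pos, alpha=1.0):
    """SPR-style loss over <user, similar user, positive item, negative item> quadruples:
    the similar user's score for the positive item adds an extra ranking signal
    (illustrative formulation, not necessarily the exact one from the paper)."""
    own = -F.logsigmoid(score_pos - score_neg)
    similar = -F.logsigmoid(score_sim_pos - score_neg)
    return (own + alpha * similar).mean()

# Placeholder scores produced by, e.g., dot products of user and item embeddings.
s_pos, s_neg, s_sim_pos = torch.randn(256), torch.randn(256), torch.randn(256)
print(bpr_loss(s_pos, s_neg).item(), spr_loss(s_pos, s_neg, s_sim_pos).item())
```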
In this work, we propose a new and general framework to defend against backdoor attacks, inspired by the observation that attack triggers usually follow a \textsc{specific} type of attack pattern, and therefore poisoned training examples have greater impacts on each other during training. We introduce the notion of the {\it influence graph}, which consists of nodes and edges representing individual training points and the associated pairwise influences, respectively. The influence between a pair of training points represents the impact of removing one training point on the prediction of the other, approximated by the influence function \citep{koh2017understanding}. Malicious training points are extracted by finding the maximum average sub-graph of a particular size. Extensive experiments on computer vision and natural language processing tasks demonstrate the effectiveness and generality of the proposed framework.
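A small sketch of the extraction step on a precomputed influence matrix: malicious points are sought as a fixed-size node set with maximal average pairwise influence, here via a simple greedy peeling heuristic. The influence-function computation itself and the exact subgraph search procedure from the paper are not reproduced.

```python
import numpy as np

def greedy_max_average_subgraph(influence, k):
    """Greedy heuristic: repeatedly drop the node contributing least to the total
    pairwise influence until k nodes with (approximately) maximal average influence remain."""
    nodes = list(range(influence.shape[0]))
    while len(nodes) > k:
        sub = influence[np.ix_(nodes, nodes)]
        worst = nodes[int(np.argmin(sub.sum(axis=1)))]
        nodes.remove(worst)
    return nodes

# Placeholder symmetric influence matrix; in the framework each entry approximates the
# effect of removing one training point on another, computed via influence functions.
n, n_poison = 50, 8
influence = np.abs(np.random.randn(n, n))
influence = (influence + influence.T) / 2
# Inject a block of mutually high-influence points to mimic poisoned examples.
poison = np.random.choice(n, n_poison, replace=False)
influence[np.ix_(poison, poison)] += 5.0

suspects = greedy_max_average_subgraph(influence, k=n_poison)
print(sorted(suspects), sorted(poison.tolist()))
```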
Scene text detection remains a challenging task, as there may be extremely small or low-resolution strokes, and close or arbitrarily shaped text. In this paper, we propose to detect text effectively by capturing fine-grained strokes and inferring structural relations between hierarchical representations in a graph. Unlike existing methods that represent text regions as a series of points or rectangular boxes, we directly localize the strokes of each text instance through a Stroke-Assisted Prediction Network (SAPN). In addition, a Hierarchical Relation Graph Network (HRGN) is adopted to perform relational reasoning and predict link likelihoods, effectively splitting close text instances and grouping node classification results into arbitrarily shaped text regions. We introduce a novel dataset with stroke-level annotations, namely SynthStroke, for offline pre-training of our model. Experiments on a wide range of benchmarks verify the state-of-the-art performance of our method. Our dataset and code will be made available.
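A hedged sketch of the grouping step: given stroke-level nodes and predicted link probabilities between them, thresholded links are merged with union-find into arbitrarily shaped text instances. The node features, SAPN, and the actual HRGN are assumed and not shown here.

```python
import numpy as np

def group_nodes(link_prob, threshold=0.5):
    """Union-find grouping of stroke nodes whose predicted link probability exceeds a threshold."""
    n = link_prob.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if link_prob[i, j] > threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Placeholder link probabilities for 6 stroke nodes; an HRGN-style model would predict these.
probs = np.array([[1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
                  [0.9, 1.0, 0.7, 0.0, 0.1, 0.0],
                  [0.8, 0.7, 1.0, 0.2, 0.1, 0.0],
                  [0.1, 0.0, 0.2, 1.0, 0.9, 0.8],
                  [0.0, 0.1, 0.1, 0.9, 1.0, 0.9],
                  [0.1, 0.0, 0.0, 0.8, 0.9, 1.0]])
print(group_nodes(probs))  # two text instances: [0, 1, 2] and [3, 4, 5]
```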
Backdoor attacks pose a new threat to NLP models. The standard strategy for constructing poisoned data in backdoor attacks is to insert triggers (e.g., rare words) into selected sentences and change the original labels to the target label. This strategy has severe flaws of being easily detected from both the trigger and the label perspectives: the injected trigger, usually a rare word, leads to abnormal natural language expressions and can thus be easily detected by a defense model; the changed target label leads to mislabeled examples and can thus be easily detected by manual inspection. To deal with this issue, in this paper we propose a new strategy to perform textual backdoor attacks that do not require an external trigger and in which the poisoned samples are correctly labeled. The core idea of the proposed strategy is to construct clean-labeled examples whose labels are correct but which can cause test-time label changes when mixed into the training set. To generate the poisoned clean-labeled examples, we propose a sentence generation model based on a genetic algorithm, to cater to the non-differentiable nature of text data. Extensive experiments show that the proposed attack strategy is not only effective but, more importantly, hard to defend against due to its triggerless and clean-labeled nature. Our work marks the first step towards developing triggerless attack strategies in NLP.
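The genetic-algorithm component can be sketched generically as below; the fitness function here is a toy placeholder, whereas the paper's fitness would encode the requirement that a candidate keeps its correct label yet shifts the victim model's behaviour on targeted test inputs once added to the training set.

```python
import random

def mutate(tokens, vocabulary, rate=0.1):
    """Randomly replace tokens; text is non-differentiable, so search is discrete."""
    return [random.choice(vocabulary) if random.random() < rate else t for t in tokens]

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def genetic_sentence_search(seed, vocabulary, fitness, pop_size=20, generations=30):
    population = [mutate(seed, vocabulary, 0.3) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)),
                           vocabulary) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: prefer sentences containing the (hypothetical) token "movie";
# a real fitness would measure the poisoning effect while keeping the label correct.
vocab = "the a this movie film plot acting was is great poor".split()
seed = "the film was great".split()
best = genetic_sentence_search(seed, vocab, fitness=lambda s: s.count("movie"))
print(" ".join(best))
```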
In this paper, we propose a new paradigm for paraphrase generation by treating the task as unsupervised machine translation (UMT), based on the assumption that there must be pairs of sentences expressing the same meaning in a large unlabeled monolingual corpus. The proposed paradigm first splits a large unlabeled corpus into multiple clusters and trains multiple UMT models using pairs of these clusters. Then, based on the paraphrase pairs produced by these UMT models, a unified surrogate model can be trained to serve as the final model for generating paraphrases, which can be used directly at test time in the unsupervised setting, or be finetuned on labeled datasets in the supervised setting. The proposed method offers advantages over machine-translation-based paraphrasing methods, as it avoids reliance on bilingual sentence pairs. It also allows human intervention in the model, so that more diverse paraphrases can be generated using different filtering criteria. Extensive experiments on existing paraphrase datasets in both the supervised and unsupervised settings demonstrate the effectiveness of the proposed paradigm.
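The first step of the pipeline, partitioning a large unlabeled monolingual corpus into clusters between which UMT models are then trained, might look roughly like this with TF-IDF features and k-means; the actual clustering features and the UMT training itself are not shown and may differ from the paper.

```python
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus; in practice this would be a very large unlabeled monolingual corpus.
corpus = [
    "the cat sat on the mat",
    "a cat was sitting on a mat",
    "stock prices fell sharply today",
    "the market dropped significantly this morning",
    "he cooked pasta for dinner",
    "dinner tonight was home-made pasta",
]

features = TfidfVectorizer().fit_transform(corpus)
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

clusters = {}
for sentence, cid in zip(corpus, cluster_ids):
    clusters.setdefault(cid, []).append(sentence)

# Each pair of clusters is treated as a pseudo language pair for unsupervised MT;
# actually training the UMT systems on the two sides is omitted here.
for src, tgt in combinations(sorted(clusters), 2):
    print(f"would train a UMT model between cluster {src} and cluster {tgt}: "
          f"{len(clusters[src])} vs {len(clusters[tgt])} sentences")
```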
A long-standing issue with paraphrase generation is how to obtain reliable supervision signals. In this paper, we propose an unsupervised paradigm based on the assumption that the probabilities of generating two sentences with the same meaning given the same context should be the same. Inspired by this fundamental idea, we propose a pipelined system consisting of paraphrase candidate generation based on contextual language models, candidate filtering using scoring functions, and paraphrase model training based on the selected candidates. The proposed paradigm offers advantages over existing paraphrase generation methods: (1) using the context regularizer on meanings, the model is able to generate a large number of high-quality paraphrase pairs; (2) by using human-interpretable scoring functions to select paraphrase pairs from the candidates, the proposed framework provides a channel for developers to intervene in the data generation process, leading to a more controllable model. Experimental results across different tasks and datasets demonstrate the effectiveness of the proposed model in both supervised and unsupervised settings.
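A minimal sketch of the candidate-filtering stage: candidate paraphrases (however generated, e.g., by a contextual language model) are kept only if a human-interpretable scoring function rates them suitably; the overlap-based score below is a deliberately simple stand-in for the scoring functions used in the paper.

```python
def lexical_overlap(a, b):
    """A deliberately simple, human-interpretable score: token overlap (Jaccard)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def filter_candidates(source, candidates, score_fn, low=0.2, high=0.8):
    """Keep candidates that are similar enough to preserve meaning (score >= low)
    but not near-copies of the source (score <= high)."""
    return [c for c in candidates if low <= score_fn(source, c) <= high]

source = "the quick brown fox jumps over the lazy dog"
candidates = [
    "a quick brown fox leaps over a lazy dog",        # plausible paraphrase
    "the quick brown fox jumps over the lazy dog",    # trivial copy, rejected
    "stock prices fell sharply on monday",            # unrelated, rejected
]
kept = filter_candidates(source, candidates, lexical_overlap)
print(kept)  # the paraphrase model would then be trained on (source, kept[i]) pairs
```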